Centralized and Distributed Online Learning for Sparse Time-Varying Optimization

Authors

Abstract

The development of online algorithms to track time-varying systems has drawn a lot of attention in recent years, in particular within the framework of convex optimization. Meanwhile, sparse optimization has emerged as a powerful tool to deal with widespread applications, ranging from dynamic compressed sensing to parsimonious system identification. In most of the literature on these problems, some prior information on the system's evolution is assumed to be available. In contrast, in this article, we propose an online learning approach, which does not employ a given model and is suitable for adversarial frameworks. Specifically, we develop centralized and distributed algorithms, and we theoretically analyze them in terms of regret, from an online learning perspective. Furthermore, we present numerical experiments that illustrate their practical effectiveness.
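The paper's own algorithms are not reproduced in this abstract, but the general technique it builds on can be sketched: online proximal gradient with ℓ1 soft-thresholding, tracking a sparse signal from streaming observations. The function names, step size, and regularization weight below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def online_sparse_tracking(stream, dim, step=0.05, lam=0.05):
    """Track a sparse (possibly time-varying) signal from streaming pairs
    (A_t, y_t) by one proximal-gradient step per round on
    f_t(x) = 0.5*||A_t x - y_t||^2 + lam*||x||_1."""
    x = np.zeros(dim)
    history = []
    for A_t, y_t in stream:
        grad = A_t.T @ (A_t @ x - y_t)   # gradient of the smooth data-fit term
        x = soft_threshold(x - step * grad, step * lam)
        history.append(x.copy())
    return history
```

In an adversarial, model-free setting such as the one the abstract describes, each round's pair `(A_t, y_t)` may come from a drifting system; the per-round update above is what regret analyses of this kind typically bound.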



Related Articles

Distributed vs. Centralized Particle Swarm Optimization for Learning Flocking Behaviors

In this paper we address the automatic synthesis of controllers for the coordinated movement of multiple mobile robots. We use a noise-resistant version of Particle Swarm Optimization to learn in simulation a set of 50 weights of a plastic artificial neural network. Two learning strategies are applied: homogeneous centralized learning, in which every robot runs the same controller and the perfo...


Cheat-proof Playout for Centralized and Distributed Online Games

We explore exploits possible for cheating in real-time, multiplayer games for both client-server and distributed, serverless architectures. We offer the first formalization of cheating in online games and propose an initial set of strong solutions. We propose a protocol that has provable anti-cheating guarantees, but suffers a performance penalty. We then develop an extended version of this pro...


Online Learning for Matrix Factorization and Sparse Coding

Sparse coding—that is, modelling data vectors as sparse linear combinations of basis elements—is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on the large-scale matrix factorization problem that consists of learning the basis set, adapting it to specific data. Variations of this problem include dictionary learning in signal processing, non...


Stabilized Sparse Online Learning for Sparse Data

Stochastic gradient descent (SGD) is commonly used for optimization in large-scale machine learning problems. Langford et al. (2009) introduce a sparse online learning method to induce sparsity via truncated gradient. With high-dimensional sparse data, however, this method suffers from slow convergence and high variance due to heterogeneity in feature sparsity. To mitigate this issue, we introd...
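The truncated-gradient method referenced above (Langford et al., 2009) can be sketched in a few lines. This is an illustrative toy version for squared loss, not the stabilized variant the snippet describes; parameter names (`gravity`, `theta`, `K`) follow common usage but are my own labeling:

```python
import numpy as np

def truncate(w, alpha, theta):
    """Truncated-gradient operator: shrink only coordinates with
    |w_i| <= theta toward zero by alpha; leave larger ones untouched."""
    out = w.copy()
    small = np.abs(w) <= theta
    out[small] = np.sign(w[small]) * np.maximum(np.abs(w[small]) - alpha, 0.0)
    return out

def truncated_gradient_sgd(stream, dim, eta=0.05, gravity=0.01, theta=0.5, K=1):
    """Plain SGD on squared loss, with truncation applied every K steps
    to induce sparsity in the learned weight vector."""
    w = np.zeros(dim)
    for t, (x, y) in enumerate(stream, start=1):
        w -= eta * (w @ x - y) * x        # SGD step on 0.5*(w.x - y)^2
        if t % K == 0:
            w = truncate(w, K * eta * gravity, theta)
    return w
```

The point of the operator is that, unlike plain ℓ1 shrinkage, it only penalizes coordinates already close to zero, which is what drives near-zero weights exactly to zero without distorting the large ones.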


Centralized and Distributed Newton Methods for Network Optimization and Extensions

We consider Newton methods for common types of single commodity and multi-commodity network flow problems. Despite the potentially very large dimension of the problem, they can be implemented using the conjugate gradient method and low-dimensional network operations, as shown nearly thirty years ago. We revisit these methods, compare them to more recent proposals, and describe how they can be i...



Journal

Journal: IEEE Transactions on Automatic Control

Year: 2021

ISSN: 0018-9286, 1558-2523, 2334-3303

DOI: https://doi.org/10.1109/tac.2020.3010242